Search Results
Search for: All records
Total resources: 5
Author / Contributor
Filter by Author / Creator
- Lin, Chi-Heng (5)
- Kaushik, Chiraag (3)
- Muthukumar, Vidya (3)
- Dyer, Eva L (2)
- Khera, Amrit (2)
- Liu, Ran (2)
- Ma, Wenrui (2)
- Dyer, Eva L. (1)
- Hsu, Yen-Chang (1)
- Hu, Bin (1)
- Jha, Niraj (1)
- Jin, Hongxia (1)
- Jin, Matthew (1)
- Jin, Matthew Y (1)
- Shen, Yilin (1)
- Tuli, Shikhar (1)
- Wang, Jun-Kun (1)
- Wibisono, Andre (1)
Filter by Editor
- Ravikumar, Pradeep (1)
Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without a charge during the embargo (administrative interval). Some links on this page may take you to non-federal websites, whose policies may differ from this site's.
- Lin, Chi-Heng; Kaushik, Chiraag; Dyer, Eva L.; Muthukumar, Vidya. Journal of Machine Learning Research. Ravikumar, Pradeep (Ed.).
Data augmentation (DA) is a powerful workhorse for bolstering performance in modern machine learning. Specific augmentations, like translations and scaling in computer vision, are traditionally believed to improve generalization by generating new (artificial) data from the same distribution. However, this traditional viewpoint does not explain the success of prevalent augmentations in modern machine learning (e.g., randomized masking, cutout, mixup) that greatly alter the training data distribution. In this work, we develop a new theoretical framework to characterize the impact of a general class of DA on underparameterized and overparameterized linear model generalization. Our framework reveals that DA induces implicit spectral regularization through a combination of two distinct effects: (a) manipulating the relative proportion of eigenvalues of the data covariance matrix in a training-data-dependent manner, and (b) uniformly boosting the entire spectrum of the data covariance matrix through ridge regression. These effects, when applied to popular augmentations, give rise to a wide variety of phenomena, including discrepancies in generalization between overparameterized and underparameterized regimes and differences between regression and classification tasks. Our framework highlights the nuanced and sometimes surprising impacts of DA on generalization, and serves as a testbed for novel augmentation design.
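The "implicit spectral regularization" the abstract describes can be illustrated concretely for one popular augmentation. The following is a minimal sketch (not the paper's code, and all names are my own): for linear regression, training on random-masking augmentations of the inputs is equivalent in expectation to a generalized ridge regression with a data-dependent diagonal regularizer, which the Monte Carlo check below confirms numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, p = 60, 5, 0.7          # samples, features, keep-probability of the mask
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

# Expected augmented loss for masking with rescaling by 1/p:
#   E||y - (M * X / p) w||^2 = ||y - X w||^2 + (1-p)/p * w^T D w,
# where M has i.i.d. Bernoulli(p) entries and D = diag(diag(X^T X)).
# Its minimizer is a generalized (data-dependent) ridge solution.
D = np.diag(np.diag(X.T @ X))
w_closed = np.linalg.solve(X.T @ X + (1 - p) / p * D, X.T @ y)

# Monte Carlo check: fit ordinary least squares jointly over many
# masked-and-rescaled copies of X and verify it matches the closed form.
K = 4000
G = np.zeros((d, d))
b = np.zeros(d)
for _ in range(K):
    Xa = X * rng.binomial(1, p, size=X.shape) / p   # one augmented copy
    G += Xa.T @ Xa
    b += Xa.T @ y
w_mc = np.linalg.solve(G / K, b / K)

# Relative gap between the Monte Carlo fit and the closed-form solution
# shrinks as K grows, illustrating the implicit-regularization view.
print(np.linalg.norm(w_mc - w_closed) / np.linalg.norm(w_closed))
```

The diagonal penalty scales each weight by its feature's empirical second moment, which is one concrete instance of the abstract's point (a): the augmentation reshapes the covariance spectrum in a training-data-dependent way rather than simply adding i.i.d. samples.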
- Kaushik, Chiraag; Liu, Ran; Lin, Chi-Heng; Khera, Amrit; Jin, Matthew; Ma, Wenrui; Muthukumar, Vidya; Dyer, Eva L. International Conference on Machine Learning.
- Tuli, Shikhar; Lin, Chi-Heng; Hsu, Yen-Chang; Jha, Niraj; Shen, Yilin; Jin, Hongxia. Association for Computational Linguistics.
- Wang, Jun-Kun; Lin, Chi-Heng; Wibisono, Andre; Hu, Bin. International Conference on Machine Learning (ICML).